What you need to know about artificial intelligence in armed conflict
The advance of artificial intelligence (AI) for military purposes raises profound concerns for humanity. We take a look at some of the key questions surrounding the use of AI, especially machine learning, in armed conflict.
What is artificial intelligence?
Artificial intelligence involves the use of computer systems to carry out tasks that would ordinarily require human cognition, planning or reasoning.
A well-known example is the AI system that underpins ChatGPT, but there are many others.
Algorithms are the foundation of an AI system. A traditional algorithm is a set of instructions, or rules, that a computer or machine follows to answer a question or solve a problem.
Machine learning is a type of AI system that creates its own instructions based on the data on which it is ‘trained’. It then uses these instructions to carry out a particular task. In a sense, the software writes its own rules. Most recent advances in AI have been in machine learning.
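To make the distinction concrete, the sketch below contrasts a hand-written rule with a model that learns its own rules from example data. It is a deliberately trivial, hypothetical illustration: the toy speed/length features, labels and use of the scikit-learn library are our assumptions, not a description of any real system.

```python
# Toy contrast between a traditional, rule-based algorithm and a
# machine learning model. Purely illustrative; not how any real system works.

from sklearn.tree import DecisionTreeClassifier

# Traditional algorithm: the rules are written by a human in advance.
def rule_based_classifier(speed_kmh: float, length_m: float) -> str:
    if speed_kmh > 80 and length_m > 6:
        return "vehicle"
    return "other"

# Machine learning: the model derives its own rules from labelled
# training data instead of being given them explicitly.
training_features = [[90, 7], [100, 8], [5, 1.7], [4, 1.8]]   # [speed, length]
training_labels = ["vehicle", "vehicle", "person", "person"]

model = DecisionTreeClassifier()
model.fit(training_features, training_labels)

print(rule_based_classifier(95, 7.5))   # follows the hand-written rule
print(model.predict([[95, 7.5]])[0])    # follows rules learned from data
```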
Some machine learning systems continue to ‘learn’ during their use for a particular task based on inputs from the environment in which they are operating.
The nature of machine learning means that an AI system does not always respond in the same way to the same input (unlike a simple rule-based algorithm). This makes the system's behaviour unpredictable.
Another challenge is that machine learning systems are often a 'black box': even when the inputs are known, it can be very difficult to explain retrospectively why a system produced a particular output.
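The toy sketch below illustrates both points using invented numbers rather than any real model: a system that samples from learned probabilities can answer the same question differently each time, and the numbers it has learned do not amount to a human-readable explanation of why it answered as it did.

```python
# Toy illustration of (1) unpredictability: sampling from a probability
# distribution means the same input can yield different outputs, and
# (2) the 'black box' problem: the learned numbers do not explain *why*
# an output was produced. All values here are invented for illustration.

import random

# Imagine a model has learned these output probabilities for one input.
learned_distribution = {"label_A": 0.55, "label_B": 0.30, "label_C": 0.15}

def generate_output(distribution: dict[str, float]) -> str:
    labels = list(distribution)
    weights = list(distribution.values())
    # Sampling means identical inputs need not give identical outputs.
    return random.choices(labels, weights=weights, k=1)[0]

# Same input, run five times: the answers can differ.
print([generate_output(learned_distribution) for _ in range(5)])

# The only 'explanation' available after the fact is these numbers.
print(learned_distribution)
```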
How could AI be deployed in armed conflicts?
Armed forces are investing heavily in AI and there are already examples of AI being deployed on the battlefield to inform military operations or as part of weapon systems.
The ICRC has highlighted three areas in which AI is being developed for use by armed actors in warfare, each of which raises significant questions from a humanitarian perspective:
- Integration in weapon systems, particularly autonomous weapon systems
- Use in cyber and information operations
- Underpinning military ‘decision support systems’
Autonomous weapon systems have received the most attention when it comes to the use of AI for military purposes. For example, concerns have been raised that AI could be used to directly trigger a strike against a person or a vehicle.
The ICRC has urged governments to adopt new international rules that would prohibit some autonomous weapons and restrict the use of others, including those controlled by AI.
Somewhat less attention has been paid to the risks associated with the use of AI in cyber and information operations, as well as decision support systems – see sections below.
All these applications could result in harm to civilians if the international community fails to take a human-centred approach to how AI is used in armed conflict.
How is AI used to inform military decisions?
A decision support system is any computerised tool that produces analyses to inform military decision-making; increasingly, such tools use AI-based software.
These systems collect, analyse and combine data sources in order to, for example, identify people or objects, assess patterns of behaviour, make recommendations for military operations, or even make predictions about future actions or situations.
For example, an AI image-recognition system might analyse drone footage, alongside other intelligence streams, to help identify military objects and recommend targets for the military.
In other words, these AI systems can be used to inform decisions about who or what to attack and when. There have even been alarming suggestions that AI-based systems could inform military decision-making on the use of nuclear weapons.
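As a purely hypothetical sketch of what such an output can look like, the snippet below shows a recommendation that carries only a statistical confidence score. All names, fields and numbers are invented for illustration; the point is that whatever a system suggests, the legal and ethical judgement still rests with a human, a concern taken up directly below.

```python
# Hypothetical sketch of the *shape* of a decision support output: a
# recommendation with a statistical confidence score that still requires
# human review. Names and values are invented; this describes no real system.

from dataclasses import dataclass

@dataclass
class Recommendation:
    object_id: str
    suggested_label: str
    confidence: float       # a statistical estimate, not a legal judgement
    needs_human_review: bool

def combine_sources(image_score: float, other_intel_score: float) -> Recommendation:
    # Naive averaging of two uncertain source scores, purely to show that
    # the output is derived from uncertain inputs and remains uncertain.
    confidence = (image_score + other_intel_score) / 2
    return Recommendation(
        object_id="object-001",
        suggested_label="possible military object",
        confidence=confidence,
        needs_human_review=True,   # human judgement cannot be delegated
    )

print(combine_sources(image_score=0.72, other_intel_score=0.55))
```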
Some argue that the use of decision support systems can help human decision-making in a way that facilitates compliance with international humanitarian law and minimizes the risks for civilians.
Others caution that over-reliance on AI-generated outputs raises concerns for civilian protection and compliance with international humanitarian law, including the need to preserve human judgement in legal decisions, especially given how opaque and biased many of today's machine learning systems are.
How could AI be used in cyber and information warfare?
AI is expected to change both how actors launch cyber-attacks and how they defend against them.
For example, systems with AI and machine learning capabilities could automatically search for vulnerabilities to exploit in an adversary's systems, while also detecting weaknesses in their own. When coming under attack, they could simultaneously and automatically launch counter-attacks.
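In very simplified terms, the automated search described above can be pictured as matching an inventory of software on a network against a list of known weaknesses. The sketch below uses invented data and no machine learning at all; it only shows the kind of repetitive task that AI is expected to scale up and speed up.

```python
# Conceptual sketch of automated vulnerability scanning: software versions
# found on a network are checked against a list of known-vulnerable versions.
# The inventory and vulnerability list are invented; real tools (with or
# without machine learning) are far more sophisticated.

known_vulnerable = {
    ("webserver", "2.4.1"),
    ("database", "9.0"),
}

network_inventory = {
    "host-a": [("webserver", "2.4.1"), ("database", "9.6")],
    "host-b": [("webserver", "2.4.9"), ("database", "9.0")],
}

def find_vulnerabilities(inventory: dict) -> list[tuple[str, str, str]]:
    """Return (host, software, version) for every known-vulnerable match."""
    findings = []
    for host, packages in inventory.items():
        for software, version in packages:
            if (software, version) in known_vulnerable:
                findings.append((host, software, version))
    return findings

print(find_vulnerabilities(network_inventory))
```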
These types of developments could increase the scale of cyber-attacks, while also changing their nature and severity, especially in terms of adverse impact on civilians and civilian infrastructure.
Information warfare has long been a part of conflicts. But the digital battlefield and AI have changed how information and disinformation are spread, and how disinformation is created.
AI-enabled systems have been widely used to produce fake content – text, audio, photos and video – which is increasingly difficult to distinguish from genuine information.
Not all forms of information warfare involve AI and machine learning, but these technologies seem set to change the nature and scale of how information is manipulated, as well as the real-world consequences.
What are the ICRC’s concerns about AI and machine learning in armed conflict?
The use of AI and machine learning in armed conflict has important humanitarian, legal, ethical and security implications.
As rapid developments in AI are integrated into military systems, it is crucial that states address the specific risks these technologies pose for people affected by armed conflict.
Although there is a wide range of implications to consider, specific risks include the following:
- An increase in the dangers posed by autonomous weapons;
- Greater harm to civilians and civilian infrastructure from cyber operations and information warfare;
- A negative impact on the quality of human decision-making in military settings.
It is important that states preserve effective human control and judgement in the use of AI, including machine learning, for tasks or decisions that could have serious consequences for human life.
Legal obligations and ethical responsibilities in war must not be outsourced to machines and software.
What is the ICRC’s message for the international community?
It is critical that the international community takes a genuinely human-centred approach to the development and use of AI in places affected by conflict.
This starts with considering the obligations and responsibilities of humans and what is required to ensure that the use of these technologies is compatible with international law, as well as societal and ethical values.
From our perspective, conversations around military uses of AI and machine learning, and any additional rules, regulations or limits that are developed, need to reflect and strengthen the existing obligations under international law, in particular international humanitarian law.
This piece was published by the ICRC team in the UK and Ireland. Find out more about our work and follow us on Twitter.